Recency-weighted Markovian inference

Author

  • Kristjan Kalm
Abstract

We describe a Markov latent state space (MLSS) model, where the latent state distribution is a decaying mixture over multiple past states. We present a simple sampling algorithm that allows us to approximate such a high-order MLSS with fixed time and memory costs.
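The core idea above — a latent state drawn from a recency-weighted (decaying) mixture over a fixed window of past states — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, buffer representation, and exponential-decay weighting are all assumptions made for the example.

```python
import numpy as np

def sample_latent_state(past_states, decay=0.9, rng=None):
    """Sample one state from a decaying mixture over past states.

    `past_states` is a fixed-size buffer, most recent state last.
    The weight of a state with age a (a = 0 for the most recent)
    is proportional to decay**a, so older states contribute less.
    Hypothetical sketch of a recency-weighted mixture, not the
    paper's exact procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = len(past_states)
    # ages run k-1 (oldest) down to 0 (most recent)
    weights = decay ** np.arange(k - 1, -1, -1).astype(float)
    weights /= weights.sum()  # normalize to a probability distribution
    idx = rng.choice(k, p=weights)
    return past_states[idx]
```

Because the buffer has fixed length and each draw touches only that buffer, both time and memory per sample stay constant regardless of how long the sequence runs — the property the abstract emphasizes.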


Related articles

ROBUSTNESS OF THE TRIPLE IMPLICATION INFERENCE METHOD BASED ON THE WEIGHTED LOGIC METRIC

This paper focuses on the robustness of the full implication triple implication inference method for fuzzy reasoning. First, based on strong regular implication, a weighted logic metric for measuring the distance between two fuzzy sets is proposed. Then, under this metric, some robustness results for the triple implication method are obtained, which demonstrate that the triple impli...


Memory in Chains: Modeling Primacy and Recency Effects in Memory for Order

Memory for order is fundamental in everyday cognition, supporting basic processes like causal inference. However, theories of order memory are narrower, if anything, than theories of memory generally. The memory-in-chains (MIC) model improves on existing theories by explaining a family of order memory effects, by explaining more processes, and by making strong predictions. This paper examines t...


Recency, Consistent Learning, and Nash Equilibrium Learning with Recency Bias

We examine the long-run implication of two models of learning with recency bias: recursive weights and limited memory. We show that both models generate similar beliefs, and that both have a weighted universal consistency property. Using the limited memory model we are able to produce learning procedures that are both weighted universally consistent and converge with probability one to strict N...





Journal:
  • CoRR

Volume abs/1711.03038  Issue –

Pages –

Publication date 2017